AAAI 2019 - Cognitive Systems

Total: 5

#1 Human-Like Sketch Object Recognition via Analogical Learning

Authors: Kezhen Chen; Irina Rabkina; Matthew D. McLure; Kenneth D. Forbus

Deep learning systems can perform well on some image recognition tasks. However, they have serious limitations, including requiring far more training data than humans do and being fooled by adversarial examples. By contrast, analogical learning over relational representations tends to be far more data-efficient, requiring only human-like amounts of training data. This paper introduces an approach that combines automatically constructed qualitative visual representations with analogical learning to tackle a hard computer vision problem, object recognition from sketches. Results from the MNIST dataset and a novel dataset, the Coloring Book Objects dataset, are provided. Comparison to existing approaches indicates that analogical generalization can be used to identify sketched objects from these datasets with several orders of magnitude fewer examples than deep learning systems require.
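
A minimal sketch of classification by analogical generalization over relational representations. This is illustrative only: the paper builds qualitative visual representations with CogSketch and classifies via SAGE and the Structure-Mapping Engine, whereas the set-overlap similarity, relations, and classes below are invented stand-ins.

```python
# Illustrative sketch: a drawing is a set of qualitative relational facts,
# and Jaccard overlap stands in for structure-mapping similarity.

def similarity(a, b):
    """Jaccard overlap between two sets of relational facts."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(query, generalizations):
    """Label the query with the class of the most similar generalization."""
    return max(generalizations, key=lambda g: similarity(query, g[1]))[0]

# Hypothetical generalizations: relational facts shared across the labeled
# examples of each class.
cat = ('cat', {('above', 'ears', 'head'), ('pointy', 'ears'),
               ('attached', 'tail', 'body')})
fish = ('fish', {('attached', 'fin', 'body'), ('triangular', 'tail')})

query = {('pointy', 'ears'), ('above', 'ears', 'head')}
print(classify(query, [cat, fish]))  # -> 'cat'
```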

#2 Attentive Tensor Product Learning

Authors: Qiuyuan Huang; Li Deng; Dapeng Wu; Chang Liu; Xiaodong He

This paper proposes a novel neural architecture, Attentive Tensor Product Learning (ATPL), to represent grammatical structures of natural language in deep learning models. ATPL exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit natural language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via the TPR-based deep neural network; 2) the use of attention modules to compute TPR; and 3) the integration of TPR with typical deep learning architectures, including long short-term memory and feedforward neural networks. The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. Our ATPL approach is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of natural language sentences. The experimental results demonstrate the effectiveness of the proposed approach in all three tasks.
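
The TPR algebra underlying the abstract fits in a few lines: a structure is stored as a sum of filler-role outer products, and a filler is retrieved by dotting the tensor with the corresponding role vector. A minimal sketch with randomly generated orthonormal roles; ATPL itself learns its role-unbinding vectors with attention inside a deep network, and the dimensions below are arbitrary.

```python
import numpy as np

# Bind/unbind algebra of Tensor Product Representations:
#   T = sum_i outer(f_i, r_i);  with orthonormal roles, f_j = T @ r_j.
rng = np.random.default_rng(0)
d_f, d_r = 8, 4                                  # filler / role dimensions

roles, _ = np.linalg.qr(rng.standard_normal((d_r, d_r)))  # orthonormal roles
fillers = rng.standard_normal((d_r, d_f))                 # one filler per role

# Bind: superimpose the filler-role outer products into a single tensor.
T = sum(np.outer(fillers[i], roles[i]) for i in range(d_r))

# Unbind: multiplying by a role vector retrieves its filler exactly.
assert np.allclose(T @ roles[2], fillers[2])
```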

#3 Scalable Recollections for Continual Lifelong Learning

Authors: Matthew Riemer; Tim Klinger; Djallel Bouneffouf; Michele Franceschini

Given the recent success of deep learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A successful continual lifelong learning system must have three key capabilities: it must learn and adapt over time, it must not forget what it has learned, and it must be efficient in both training time and memory. Recent techniques have focused primarily on the first two capabilities, while questions of efficiency remain largely unexplored. In this paper, we consider the problem of efficient and effective storage of experiences over very long time frames. In particular, we consider the case where typical experiences are O(n) bits and memories are limited to O(k) bits for k << n. We present a novel scalable architecture and training algorithm for this challenging setting and provide an extensive evaluation of its performance. Our results show that we can achieve considerable gains on top of state-of-the-art methods such as GEM.
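
To make the O(k) versus O(n) budget concrete, here is a toy sketch of a replay buffer that stores compressed codes rather than raw experiences. A fixed random projection with 8-bit quantization stands in for the learned autoencoder the paper trains; all dimensions and names below are invented.

```python
import numpy as np

# Sketch of the storage idea: keep an episodic replay buffer of compressed
# codes instead of raw experiences, so each memory costs O(k) << O(n) bits.

rng = np.random.default_rng(0)
n, k = 1024, 32                                   # raw vs. code dimensions
enc = rng.standard_normal((k, n)) / np.sqrt(n)    # stand-in encoder
dec = np.linalg.pinv(enc)                         # stand-in decoder

def compress(x):
    """Project an n-dim experience to k dims, then quantize to one byte each."""
    z = enc @ x
    return np.clip(np.round(z * 32), -128, 127).astype(np.int8)

def recollect(code):
    """Decode a k-byte code back into an approximate experience."""
    return dec @ (code.astype(np.float32) / 32)

buffer = []                                  # replay buffer of k-byte codes
x = rng.standard_normal(n)                   # one raw "experience"
buffer.append(compress(x))                   # store 32 bytes, not 4 KB of floats
x_hat = recollect(buffer[-1])                # approximate rehearsal sample
```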

#4 Simulation-Based Approach to Efficient Commonsense Reasoning in Very Large Knowledge Bases

Authors: Abhishek Sharma; Keith M. Goolsbey

Cognitive systems must reason with large bodies of general knowledge to perform complex tasks in the real world. However, due to the intractability of reasoning in large, expressive knowledge bases (KBs), many AI systems have limited reasoning capabilities. Successful cognitive systems have used a variety of machine learning and axiom selection methods to improve inference. In this paper, we describe a search heuristic that uses a Monte Carlo simulation technique to choose inference steps. We test the efficacy of this approach on a very large and expressive KB, Cyc. Experimental results on hundreds of queries show that this method is highly effective in reducing inference time and improving question-answering (Q/A) performance.
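
A toy sketch of scoring inference steps by simulation: each candidate rule for a goal is rated by random rollouts that estimate how often it closes the proof within a depth budget, and the best-scoring step is expanded first. The real system searches Cyc's KB with a far richer heuristic; the string-level chainer, rules, and facts below are hypothetical.

```python
import random

# Backward chainer over toy string facts/rules, with Monte Carlo rollouts
# used to rank candidate inference steps for a goal.
FACTS = {'bird(tweety)', 'small(tweety)'}
RULES = [('flies', ['bird(X)', 'notPenguin(X)']),
         ('flies', ['plane(X)']),
         ('notPenguin', ['small(X)'])]

def candidates(goal):
    """Bodies of rules whose head matches the goal, argument substituted."""
    pred, arg = goal[:-1].split('(')
    return [[b.replace('X', arg) for b in body]
            for head, body in RULES if head == pred]

def rollout(goals, depth):
    """Randomly expand open goals; True if everything grounds in FACTS."""
    if all(g in FACTS for g in goals):
        return True
    if depth == 0:
        return False
    open_goals = [g for g in goals if g not in FACTS]
    cands = candidates(open_goals[0])
    if not cands:
        return False
    return rollout(random.choice(cands) + open_goals[1:], depth - 1)

def best_step(goal, n_sims=200, depth=5):
    """Score each candidate step by its simulated proof rate."""
    scores = [(sum(rollout(body, depth) for _ in range(n_sims)) / n_sims, body)
              for body in candidates(goal)]
    return max(scores)

print(best_step('flies(tweety)'))  # prefers the bird/notPenguin rule
```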

#5 Modelling Autobiographical Memory Loss across Life Span

Authors: Di Wang; Ah-Hwee Tan; Chunyan Miao; Ahmed A. Moustafa

Neurocomputational modelling of long-term memory is a core topic in computational cognitive neuroscience and is essential for building self-regulating, brain-like AI systems. In this paper, we study how people generally lose their memories and emulate various memory loss phenomena using a neurocomputational autobiographical memory model. Specifically, based on prior neurocognitive and neuropsychological studies, we identify three neural processes, namely overload, decay and inhibition, which lead to memory loss during memory formation, storage and retrieval, respectively. For model validation, we collect a memory dataset comprising more than one thousand life events and emulate the three key memory loss processes with model parameters learnt from the memory recall behavioural patterns of human subjects in different age groups. The emulation results correlate highly with human memory recall performance across the life span, even for a population that was not used for parameter learning. To the best of our knowledge, this is the first quantitative evaluation of autobiographical memory loss using a neurocomputational model.
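
A minimal quantitative sketch of the three processes named above: overload at encoding, decay in storage, and inhibition at retrieval. All functional forms and parameter values here are invented for illustration; the paper fits its model parameters to human recall patterns by age group.

```python
import math

def encoded(n_events_today, capacity=20):
    """Overload: the chance an event is encoded drops as the day fills up."""
    return min(1.0, capacity / max(n_events_today, 1))

def strength(age_of_memory_years, decay=0.08):
    """Decay: stored trace strength falls exponentially with elapsed time."""
    return math.exp(-decay * age_of_memory_years)

def recall_prob(event_years_ago, n_events_today, n_competitors, beta=0.05):
    """Inhibition: competing traces at retrieval divide down recall odds."""
    s = encoded(n_events_today) * strength(event_years_ago)
    return s / (1 + beta * n_competitors)

# A 20-year-old memory from a busy day, recalled amid 10 similar memories:
print(round(recall_prob(20, 40, 10), 3))  # -> 0.067
```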